Hybrid-attention guided network with multiple resolution features for person re-identification

Authors

Abstract

Extracting effective and discriminative features is highly important for addressing the challenges of person re-identification (re-ID). At present, deep convolutional neural networks typically use high-level features to identify pedestrians. However, some essential spatial information contained in low-level features is lost during feature learning. Moreover, most existing re-ID methods rely primarily on hand-crafted bounding boxes, in which images are precisely aligned; these approaches are unrealistic for practical applications due to the inaccuracy of automatic detection algorithms. To address these problems, we propose a hybrid-attention guided network with multiple resolution features for re-ID. First, we construct a multi-resolution fusion strategy to ensure that features can be spatially aligned for fusion while remaining discriminative after fusion. We then introduce an attention mechanism with a multi-granularity operation to reduce the impact of irregularities by gently expanding the size of the attention maps. In addition, a new multi-pool extractor is designed to obtain different types of features using two pools, further improving the representation capability. Extensive experiments demonstrate the superiority of our approach. Our code is available at https://github.com/libraflower/MutipleFeature-for-PRID.
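
As a hedged illustration of two of the components named above, the sketch below shows one plausible form of a multi-resolution fusion step (upsampling the coarse high-level map so it is spatially aligned with the low-level map before fusion) and a two-pool extractor. The choice of average plus max pooling, the bilinear upsampling, and all dimensions are assumptions for illustration; the linked repository contains the authors' actual implementation.

```python
# Minimal sketch, assuming avg+max as the "two pools" and bilinear
# upsampling for spatial alignment; not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionFusion(nn.Module):
    """Resize a coarse high-level map to match a low-level map,
    then fuse the two channel-wise with a 1x1 convolution."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.fuse = nn.Conv2d(low_ch + high_ch, out_ch, kernel_size=1)

    def forward(self, low_level, high_level):
        # Spatially align before concatenation so fusion is well-defined.
        high_level = F.interpolate(high_level, size=low_level.shape[-2:],
                                   mode='bilinear', align_corners=False)
        return self.fuse(torch.cat([low_level, high_level], dim=1))

class MultiPoolExtractor(nn.Module):
    """Combine two global pools (assumed: average + max) into one descriptor."""
    def __init__(self, in_ch, embed_dim):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool2d(1)
        self.max = nn.AdaptiveMaxPool2d(1)
        self.embed = nn.Linear(2 * in_ch, embed_dim)

    def forward(self, x):
        a = self.avg(x).flatten(1)   # smooth, context-oriented statistics
        m = self.max(x).flatten(1)   # peaky, most-discriminative responses
        return self.embed(torch.cat([a, m], dim=1))

# Usage: fuse a 256-channel low-level map with a 2048-channel high-level map,
# then pool the fused result into a 512-d re-ID descriptor.
fusion = MultiResolutionFusion(256, 2048, 512)
extractor = MultiPoolExtractor(512, 512)
feat = extractor(fusion(torch.randn(2, 256, 64, 32),
                        torch.randn(2, 2048, 16, 8)))
print(feat.shape)  # torch.Size([2, 512])
```

One common motivation for pairing two pool types is that average pooling preserves smooth contextual statistics while max pooling keeps only the strongest local responses, so their concatenation carries complementary information.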


Similar references

Harmonious Attention Network for Person Re-Identification

Existing person re-identification (re-id) methods either assume the availability of well-aligned person bounding box images as model input or rely on constrained attention selection mechanisms to calibrate misaligned images. They are therefore sub-optimal for re-id matching in arbitrarily aligned person images potentially with large human pose variations and unconstrained auto-detection errors....


Learning Discriminative Features with Multiple Granularities for Person Re-Identification

The combination of global and partial features has been an essential solution for improving discriminative performance in person re-identification (Re-ID) tasks. Previous part-based methods mainly focus on locating regions with specific pre-defined semantics to learn local representations, which increases learning difficulty and is neither efficient nor robust in scenarios with large variances. In this p...
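
For context on the global-plus-parts idea, here is a minimal PyTorch sketch (an assumption-laden simplification, not the paper's multi-branch architecture) that concatenates a global descriptor with uniformly pooled horizontal part descriptors:

```python
# Sketch only: uniform horizontal stripes stand in for the paper's
# multi-granularity branches; stripe count and pooling are assumptions.
import torch
import torch.nn as nn

class GlobalPlusParts(nn.Module):
    def __init__(self, in_ch, num_parts=3):
        super().__init__()
        self.num_parts = num_parts
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.part_pool = nn.AdaptiveAvgPool2d((num_parts, 1))

    def forward(self, x):                      # x: (B, C, H, W) backbone map
        g = self.global_pool(x).flatten(1)     # one global feature
        p = self.part_pool(x).flatten(2)       # (B, C, num_parts)
        parts = [p[:, :, i] for i in range(self.num_parts)]
        return torch.cat([g] + parts, dim=1)   # multi-granularity descriptor

feat = GlobalPlusParts(2048, num_parts=3)(torch.randn(2, 2048, 16, 8))
print(feat.shape)  # torch.Size([2, 8192]) = 2048 * (1 global + 3 parts)
```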


Saliency Weighted Features for Person Re-identification

In this work we propose a novel person re-identification approach. The solution, inspired by human gazing capabilities, aims to identify the salient regions of a given person. Such regions are used as a weighting tool in the image feature extraction process. This novel representation is then combined with a set of other visual features in a pairwise-based multiple metric learning framework. F...
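
A minimal sketch of the saliency-weighting idea described above: a saliency map re-weights spatial locations of an image feature map before pooling. The weighted-average form and the placeholder saliency input are assumptions, and the paper's metric-learning stage is omitted.

```python
# Sketch only: the saliency source (random here) and the normalized
# weighted-average formulation are illustrative assumptions.
import torch

def saliency_weighted_descriptor(feat, saliency):
    # feat: (B, C, H, W) image features; saliency: (B, 1, H, W), non-negative
    w = saliency / (saliency.sum(dim=(2, 3), keepdim=True) + 1e-8)
    return (feat * w).sum(dim=(2, 3))          # (B, C) saliency-weighted average

feat = torch.randn(2, 512, 32, 16)
sal = torch.rand(2, 1, 32, 16)
print(saliency_weighted_descriptor(feat, sal).shape)  # torch.Size([2, 512])
```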


Deep-Person: Learning Discriminative Deep Features for Person Re-Identification

Recently, many methods of person re-identification (ReID) rely on part-based feature representations to learn a discriminative pedestrian descriptor. However, the spatial context between these parts is ignored, because an independent extractor is applied to each separate part. In this paper, we propose to apply Long Short-Term Memory (LSTM) in an end-to-end way to model the pedestrian, seen as a sequence of bo...
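
To make the sequence view concrete, the following hedged sketch pools a backbone feature map into a top-to-bottom sequence of body-part vectors and summarizes it with an LSTM; the stripe count and dimensions are illustrative assumptions rather than Deep-Person's exact configuration.

```python
# Sketch only: horizontal stripes approximate "body parts", and the final
# LSTM hidden state serves as the sequence summary.
import torch
import torch.nn as nn

class PartSequenceLSTM(nn.Module):
    def __init__(self, in_ch=2048, hidden=256, num_parts=6):
        super().__init__()
        # Pool each horizontal stripe into one vector -> a sequence of parts.
        self.stripe_pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.lstm = nn.LSTM(in_ch, hidden, batch_first=True)

    def forward(self, x):                          # x: (B, C, H, W)
        seq = self.stripe_pool(x).flatten(2)       # (B, C, num_parts)
        seq = seq.permute(0, 2, 1)                 # (B, num_parts, C), head-to-feet
        _, (h, _) = self.lstm(seq)
        return h[-1]                               # (B, hidden) sequence summary

desc = PartSequenceLSTM()(torch.randn(2, 2048, 24, 8))
print(desc.shape)  # torch.Size([2, 256])
```

Running the stripes through a recurrent model is what lets neighboring parts inform each other, in contrast to the independent per-part extractors criticized above.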


Hierarchical Cross Network for Person Re-identification

Person re-identification (person re-ID) aims at matching target person(s) captured from different and non-overlapping camera views. It plays an important role in public safety and has applications in various tasks such as human retrieval, human tracking, and activity analysis. In this paper, we propose a new network architecture called Hierarchical Cross Network (HCN) to perform person re-ID. I...



Journal

Journal title: Information Sciences

Year: 2021

ISSN: 0020-0255, 1872-6291

DOI: https://doi.org/10.1016/j.ins.2021.07.058